
    Image Compression Using Cascaded Neural Networks

    Images form an increasingly large part of modern communications, bringing the need for efficient and effective compression. Many techniques have been developed for this purpose, including transform coding, vector quantization, and neural networks. In this thesis, a new neural network method is used to achieve image compression. This work extends the use of 2-layer neural networks to a combination of cascaded networks, each with one node in the hidden layer. A redistribution of the gray levels in the training phase is implemented in a random fashion to make the minimization of the mean square error applicable to a broad range of images. The computational complexity of this approach is analyzed in terms of the overall number of weights and overall convergence. Image quality is measured objectively, using the peak signal-to-noise ratio, and subjectively, using human perception. The effects of different image contents and compression ratios are assessed. Results show the performance superiority of cascaded neural networks over fixed-architecture training paradigms, especially at high compression ratios. The proposed method is implemented in MATLAB, and the results obtained, such as the compression ratio and computing time of the compressed images, are presented.
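
    The abstract above describes the method only at a high level; the following is a minimal NumPy sketch of one plausible reading of it, not the thesis's MATLAB implementation. Everything concrete here is an assumption chosen for illustration: linear units, 4x4 blocks of a synthetic test image, a three-stage cascade in which each stage is a 2-layer network with a single hidden node trained by gradient descent to minimize the mean square reconstruction error of the current residual, and the helper names to_blocks and train_single_node. The random redistribution of gray levels used during training in the thesis is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic 64x64 gray-level image in [0, 1]; a real test would use a standard image.
    img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1)) * np.linspace(0.5, 1.0, 64)[:, None]

    def to_blocks(im, b=4):
        # Split the image into non-overlapping b x b blocks, one block per row.
        h, w = im.shape
        return im.reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b * b)

    def train_single_node(X, lr=0.1, epochs=3000):
        # 2-layer linear network with one hidden node, trained to reconstruct X
        # by minimizing the mean square error with batch gradient descent.
        n, d = X.shape
        w_in = rng.normal(scale=0.1, size=d)    # encoder: d inputs -> 1 hidden node
        w_out = rng.normal(scale=0.1, size=d)   # decoder: 1 hidden node -> d outputs
        for _ in range(epochs):
            h = X @ w_in                        # hidden activations, shape (n,)
            E = np.outer(h, w_out) - X          # reconstruction error, shape (n, d)
            g_out = (h[:, None] * E).mean(axis=0)
            g_in = ((E @ w_out)[:, None] * X).mean(axis=0)
            w_out -= lr * g_out
            w_in -= lr * g_in
        return w_in, w_out

    blocks = to_blocks(img)                     # shape (256, 16)
    residual = blocks.copy()
    recon = np.zeros_like(blocks)
    stages = []
    for _ in range(3):                          # cascade of three single-node networks
        w_in, w_out = train_single_node(residual)
        code = residual @ w_in                  # one stored coefficient per block per stage
        approx = np.outer(code, w_out)
        recon += approx
        residual -= approx
        stages.append((w_in, w_out))

    mse = np.mean((blocks - recon) ** 2)
    psnr = 10 * np.log10(1.0 / mse)             # peak value is 1.0 for data in [0, 1]
    ratio = blocks.shape[1] / len(stages)       # 16 pixels coded as 3 coefficients per block
    print(f"PSNR = {psnr:.1f} dB, compression ratio ~ {ratio:.1f}:1 (ignoring weight overhead)")

    In this sketch, each extra stage stores one more coefficient per block, so the depth of the cascade directly trades compression ratio against reconstruction quality, which is where the abstract's comparison at high compression ratios becomes relevant.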

    Hermeneutical Injustice and Dynamic Nominalism

    In this thesis, I develop an approach to hermeneutical justice that is both preemptive and dynamic. I introduce Miranda Fricker’s work on hermeneutical injustice as well as her proposal of hermeneutical justice as a corrective, mitigating virtue. I argue that a successful response to cases of hermeneutical injustice must be preemptive in that it addresses the structural errors in our conceptual resources instead of merely shifting how listeners interact with marginalized epistemic agents. Using Ian Hacking’s Dynamic Nominalism as a framework, I argue further that, in addition to being preemptive, a good approach to hermeneutical justice must also take into consideration the dynamic interchange between conceptual resources and social reality. Therefore, it cannot measure success in terms of accuracy of representation between conceptual resources and social reality. Instead, a fruitful strategy toward hermeneutical justice must be forward-looking, considering how new conceptual resources create new social realities.